
Custom (MCORE) FSDP interface #12391

Open · wants to merge 7 commits into main

Conversation

youngeunkwon0405
Member

Important

The Update branch button should only be pressed on very rare occasions.
An outdated branch never blocks the merge of a PR.
Please reach out to the automation team before pressing that button.

What does this PR do ?

Add a one-line overview of what this PR aims to accomplish.

Collection: [Note which collection this PR will affect]

Changelog

  • Add specific line-by-line info of the high-level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g. Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines contain the specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

@@ -57,6 +57,8 @@
bucket_size: Optional[int] = None # Maximum number of parameters in each bucket
average_in_collective: bool = False # If true, compute average in collective directly, as opposed to dividing by the dp_size first and then computing sum in the collective
fp8_param_gather: bool = False # If true, keep the compute param in fp8 (do not use any other intermediate dtype) and perform the param all-gather in fp8
use_custom_fsdp: bool = False # If true, use MCore's custom FSDP implementation. recipe.model.config.gradient_accumulation_fusion must be False when using this
data_parallel_sharding_strategy: str = "no_shard" # Data parallel sharding strategy, choices=['no_shard', 'optim', 'optim_grads', 'optim_grads_params']
Collaborator

Is this valid only with use_custom_fsdp=True?

Collaborator

Then, wouldn't optim_grads_params be the default option from the NeMo side? So, use_custom_fsdp=True means ZeRO-3.

Also, this needs to be described in the comment description: Custom (MCORE) FSDP data parallel sharding strategy, choices=['no_shard', 'optim', 'optim_grads', 'optim_grads_params']

Member Author

Modified the comment accordingly.
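
For readers following along, here is a minimal usage sketch of the two options discussed in this thread. It assumes the fields land on Megatron-Core's DistributedDataParallelConfig (as the surrounding diff suggests) and that the installed Megatron-Core version already exposes them; it is illustrative, not the PR's own example.

from megatron.core.distributed import DistributedDataParallelConfig

# Enable MCore's custom FSDP with full (ZeRO-3-style) sharding.
ddp_config = DistributedDataParallelConfig(
    use_custom_fsdp=True,
    data_parallel_sharding_strategy="optim_grads_params",
)

# Per the config comment above, gradient_accumulation_fusion must be disabled
# on the model config when use_custom_fsdp is True, e.g.:
# recipe.model.config.gradient_accumulation_fusion = False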

@@ -689,6 +689,12 @@ def init_ddp(self):
) # We need to do this explicitly since this is an attr PyTorch uses
model_chunk.__class__.__getattr__ = getattr_proxy # type: ignore

# Ensure that if using custom FSDP, gradient_accumulation_fusion is disabled on the model config.
Collaborator
@erhoo82 erhoo82 Feb 26, 2025

I think this should be changed to this:

gradient_accumulation_fusion cannot be used with MCORE FSDP

Also, gradient_accumulation_fusion shouldn't work with FSDP2 either, because we need to ReduceScatter the current gradient before accumulating the local grad shard partial sum.

Member Author

I will modify the text accordingly. For Torch FSDP2, I agree, but we don't have that in NeMo yet.
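
To make the intent concrete, here is a rough sketch of the kind of guard the added lines presumably introduce in init_ddp; the function and argument names are assumptions for illustration, not the PR's actual code.

def assert_no_grad_accum_fusion(ddp_config, model_config) -> None:
    """Sketch of the check implied by the comment above; names are assumptions."""
    if getattr(ddp_config, "use_custom_fsdp", False):
        assert not model_config.gradient_accumulation_fusion, (
            "gradient_accumulation_fusion cannot be used with MCORE FSDP; "
            "set recipe.model.config.gradient_accumulation_fusion = False"
        )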

@erhoo82
Collaborator

erhoo82 commented Feb 26, 2025

Please change the name of the PR to Custom (MCORE) FSDP interface.

Also, we need to add the other FSDP args to the arg map:

keep_fp8_transpose_cache_when_using_custom_fsdp

suggested_communication_unit_size

init_model_with_meta_device

@erhoo82
Collaborator

erhoo82 commented Feb 26, 2025

Are the args properly ported to the MCORE modules?
I mean, when launching with the NeMo 2 interface.

@youngeunkwon0405 youngeunkwon0405 changed the title Add assertion to disable gradient_accumulation_fusion when using MCore FSDP Custom (MCORE) FSDP interface Feb 26, 2025
@youngeunkwon0405
Member Author

Are the args properly ported to the MCORE modules? I mean, when launching with the NeMo 2 interface.

Please change the name of the PR to Custom (MCORE) FSDP interface.

Also, we need to add the other FSDP args to the arg map:

keep_fp8_transpose_cache_when_using_custom_fsdp

suggested_communication_unit_size

init_model_with_meta_device

Okay, I will also add these.
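
For illustration, a hypothetical sketch of how those extra args could be mirrored on the NeMo side; the field names come from the list above, while the types and defaults are assumptions, not values taken from this PR.

from dataclasses import dataclass
from typing import Optional


@dataclass
class CustomFSDPArgs:
    """Hypothetical container for the additional MCore custom FSDP options."""
    keep_fp8_transpose_cache_when_using_custom_fsdp: bool = False
    suggested_communication_unit_size: Optional[int] = None
    init_model_with_meta_device: bool = False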

@youngeunkwon0405
Member Author

Are the args properly ported to the MCORE modules? I mean, when launching with the NeMo 2 interface.

Yes, it was working well.

Signed-off-by: Youngeun Kwon <[email protected]>
@youngeunkwon0405
Member Author

@erhoo82 I guess init_model_with_meta_device will also require a fix on the NeMo side, so I didn't add it at this point.

@erhoo82
Collaborator

erhoo82 commented Feb 26, 2025

@youngeunkwon0405 What fix does init_model_with_meta_device need? Would this be a functional fix?

@erhoo82
Collaborator

erhoo82 commented Feb 26, 2025

Also, can we set the default value of data_parallel_sharding_strategy to optim_grads_params? If we disable use_custom_fsdp, data_parallel_sharding_strategy is invalid anyway, right?

Not sure why no_shard should be the baseline, given we all think ZeRO-3 is the default.
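
For reference, the sharding choices are commonly read in ZeRO terms roughly as follows (a summary of the discussion above, not wording from the PR):

# Rough correspondence between data_parallel_sharding_strategy and ZeRO stages.
SHARDING_TO_ZERO = {
    "no_shard": "no sharding (plain data parallelism, ZeRO-0)",
    "optim": "shard optimizer states (ZeRO-1)",
    "optim_grads": "shard optimizer states and gradients (ZeRO-2)",
    "optim_grads_params": "shard optimizer states, gradients, and parameters (ZeRO-3)",
}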

@youngeunkwon0405
Member Author

@youngeunkwon0405 What fix does init_model_with_meta_device need? Would this be a functional fix?

I was speculating like this because some modification was required in MLM: https://gitlab-master.nvidia.com/ADLR/megatron-lm/-/merge_requests/2071/diffs#67e48551394c01c5c3935d0a3546c2404cfce8a4_521_549

We need to test it first.
